Dynamic tunneling based regularization in feedforward neural networks
Authors
Abstract
Similar articles
Regularization parameter estimation for feedforward neural networks
Under the framework of the Kullback-Leibler (KL) distance, we show that a particular case of Gaussian probability function for feedforward neural networks (NNs) reduces to the first-order Tikhonov regularizer. The smoothing parameter in kernel density estimation plays the role of the regularization parameter. Under some approximations, an estimation formula is derived for estimating regularization p...
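As a hedged illustration (not this paper's own derivation), a first-order Tikhonov regularizer penalizes the squared norm of the network's input gradient alongside the data-fit term. A minimal NumPy sketch for a one-hidden-layer network, with the gradient estimated by central finite differences; all names here are illustrative:

```python
import numpy as np

def tanh_net(x, W1, b1, w2, b2):
    """One-hidden-layer feedforward network mapping x to a scalar."""
    h = np.tanh(W1 @ x + b1)
    return w2 @ h + b2

def tikhonov_loss(X, y, params, lam, eps=1e-5):
    """MSE plus a first-order Tikhonov penalty lam * ||df/dx||^2,
    averaged over the inputs; df/dx via central finite differences."""
    W1, b1, w2, b2 = params
    preds = np.array([tanh_net(x, W1, b1, w2, b2) for x in X])
    mse = np.mean((preds - y) ** 2)
    penalty = 0.0
    for x in X:
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = eps
            g = (tanh_net(x + e, W1, b1, w2, b2) -
                 tanh_net(x - e, W1, b1, w2, b2)) / (2 * eps)
            penalty += g ** 2
    return mse + lam * penalty / len(X)
```

Here `lam` plays the role the abstract assigns to the smoothing/regularization parameter: larger values favor flatter network responses.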
Feedforward Neural Networks
Here x is an input, y is a “label”, v ∈ Rd is a parameter vector, and f(x, y) ∈ Rd is a feature vector that corresponds to a representation of the pair (x, y). Log-linear models have the advantage that the feature vector f(x, y) can include essentially any features of the pair (x, y). However, these features are generally designed by hand, and in practice this is a limitation. It can be laborio...
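The log-linear model described above can be sketched directly from its definition, p(y | x; v) ∝ exp(v · f(x, y)); the feature function below is a hypothetical hand-designed example, standing in for the hand-crafted features the abstract mentions:

```python
import numpy as np

def loglinear_probs(x, labels, v, f):
    """p(y | x; v) = exp(v . f(x, y)) / sum_y' exp(v . f(x, y'))."""
    scores = np.array([v @ f(x, y) for y in labels])
    scores -= scores.max()          # subtract max for numerical stability
    expd = np.exp(scores)
    return expd / expd.sum()

# Hypothetical hand-designed feature vector for the pair (x, y).
def f(x, y):
    return np.array([x * (y == 0), x * (y == 1), 1.0 * (y == 1)])

v = np.array([0.5, -0.2, 1.0])      # parameter vector
p = loglinear_probs(2.0, [0, 1], v, f)
```

The advantage noted in the abstract is visible here: `f` may compute essentially any function of the pair (x, y), but someone has to design it by hand.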
Feedforward Neural Networks
Feedforward neural networks have been used to perform classifications and to learn functional mappings. This paper compares human performance to feedforward neural networks using back propagation in generating functional relationships from limited data. Many business judgments are made in situations where decision makers are required to infer relationships from partial, incomplete, and conflict...
Evolving Neural Feedforward Networks
For many practical problem domains the use of neural networks has led to very satisfactory results. Nevertheless, the choice of an appropriate, problem-specific network architecture still remains a very poorly understood task. Given an actual problem, one can choose a few different architectures, train the chosen architectures a few times and finally select the architecture with the best behaviou...
Learning Stochastic Feedforward Neural Networks
Multilayer perceptrons (MLPs) or neural networks are popular models used for nonlinear regression and classification tasks. As regressors, MLPs model the conditional distribution of the predictor variables Y given the input variables X. However, this predictive distribution is assumed to be unimodal (e.g. Gaussian). For tasks involving structured prediction, the conditional distribution should...
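To make the multimodality point concrete, here is a hedged toy sketch (not the paper's model) of a stochastic feedforward net: hidden units are sampled as Bernoulli variables, so repeated forward passes on the same input can land in different output modes, unlike a deterministic MLP's single unimodal prediction. All weights below are illustrative:

```python
import numpy as np

def sffnn_sample(x, W1, b1, W2, b2, rng):
    """Draw one output from a toy stochastic feedforward network:
    binary hidden units h ~ Bernoulli(sigmoid(W1 x + b1)),
    then a linear readout W2 h + b2."""
    p_h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))       # hidden activation probs
    h = (rng.random(p_h.shape) < p_h).astype(float)  # sample binary hiddens
    return W2 @ h + b2

rng = np.random.default_rng(0)
W1 = np.array([[4.0], [-4.0]]); b1 = np.zeros(2)
W2 = np.array([[10.0, -10.0]]); b2 = np.zeros(1)

# For the same input x = 0, repeated draws spread over several modes.
samples = [sffnn_sample(np.array([0.0]), W1, b1, W2, b2, rng)[0]
           for _ in range(200)]
```

With these weights the output takes values in {-10, 0, 10} for a fixed input, a multimodal conditional distribution no single-Gaussian MLP output could represent.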
Journal
Journal title: Artificial Intelligence
Year: 2001
ISSN: 0004-3702
DOI: 10.1016/s0004-3702(01)00112-6